127 research outputs found

    Particle Separation Using Electrokinetically-Driven Deterministic Lateral Displacement: A Computational Study

    Get PDF
    Electrokinetically-driven deterministic lateral displacement (e-DLD) is a recently proposed technique for continuous, two-dimensional fractionation of particle suspensions in microfluidic platforms. It utilizes the negative dielectrophoretic force induced by the DC electric field gradients formed around an array of regularly spaced posts. While e-DLD devices have been shown to separate particles by size, a fundamental understanding of the separation process and the factors that affect it is still lacking. This thesis aims to answer these questions through a computational study of electrokinetic particle transport and separation in e-DLD devices. We first numerically demonstrate continuous, two-dimensional separation of 5 μm, 10 μm, and 15 μm-diameter rigid circular particles in an e-DLD device. These particles can be viewed as good mimics of red blood cells, white blood cells, and tumor cells in blood, respectively. A number of features are observed in the particle kinetics, including directional locking and sharp transitions between migration angles upon variations in the direction of the force, which are advantageous for high-resolution two-dimensional separation. We then discuss several factors that affect separation in the proposed e-DLD device, including the electric field, forcing angle, post gap ratio, post shape, and particle shape. We find that the electric field influences particle separation through the field gradient: the larger the electric field, the larger the field gradient. We also investigate the orientation of the driving field with respect to the array of posts and find that, at specific forcing angles, particles of different sizes migrate in different directions, enabling continuous, two-dimensional separation in electrokinetic flow. Moreover, we study the effect of the post gap ratio on particle separation.
The smaller the ratio, the larger the electric field gradient around the posts, so particles are more easily deflected away from the posts by the enhanced negative dielectrophoretic force. In addition, we find that the shape of the posts plays an important role in particle separation: using equilateral triangular posts, we are able to separate smaller particles than with traditional circular posts under the same conditions. We also examine the effect of particle shape on separation in e-DLD. An elliptic particle is found to behave like a smaller circular particle because of its preferred orientation in the electric field, so circular and elliptic particles of equal surface area can readily be separated. Finally, we compare e-DLD with traditional pressure-driven DLD. With the same geometry, the e-DLD device can separate much smaller particles; conversely, pressure-driven DLD requires a smaller gap size and/or a smaller forcing angle to achieve the same separation, which makes fabrication harder. Using an e-DLD device thus considerably eases DLD device fabrication and shortens the length of the post array.
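
The field-strength dependence described above can be illustrated with a toy calculation: the negative dielectrophoretic force scales with the gradient of |E|², so scaling the applied field by a factor k scales the force by k². The 1/x field decay below is a hypothetical stand-in for the field magnitude near a post, not the solved field of an e-DLD device.

```python
import numpy as np

# Toy 1D illustration: the DEP force magnitude goes as grad(|E|^2), so
# doubling the applied field quadruples the force. The field model
# E(x) = E0 / x is an illustrative assumption, not a device solution.

def dep_force(E0, x, dx=1e-6):
    """Finite-difference magnitude of grad(|E|^2) for the toy field E = E0/x."""
    E2 = lambda x: (E0 / x) ** 2
    return abs(E2(x + dx) - E2(x)) / dx

f1 = dep_force(1.0, x=10e-6)
f2 = dep_force(2.0, x=10e-6)   # field doubled
print(f2 / f1)                 # force ratio: 4.0 (quadratic in E0)
```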

    Continuous Versus Discontinuous Elastic Modulus Distribution in Inverse Problems Based on Finite Element Methods

    Get PDF
    Elasticity imaging, also known as elastography, aims to determine the elastic property distribution of non-homogeneous deformable solids such as soft tissues. This can be done non-destructively using displacement fields measured with medical imaging modalities such as ultrasound or magnetic resonance imaging. Elasticity imaging can potentially be used to detect tumors based on the stiffness contrast between different materials, which requires the solution of an inverse problem in elasticity. This field has grown rapidly in the past decade. One of the most useful applications of elasticity imaging may be in breast cancer diagnosis, where a tumor could potentially be detected and visualized through its stiffness contrast with the surrounding tissues. In this work, the inverse problem is solved for the shear modulus, which is directly related to Young's modulus through Poisson's ratio. The inverse problem is posed as a constrained optimization problem in which the difference between a computed (predicted) and a measured displacement field is minimized, with the computed displacement field satisfying the equations of equilibrium. The material is modeled as isotropic and incompressible. The present work focuses on assessing the solution of the inverse problem for problem domains defined with continuous and discontinuous shear modulus distributions. In particular, two problem domains are considered: 1) a stiff inclusion in a homogeneous background, representing a stiff tumor surrounded by soft tissue, and 2) a layered ring model representing an arterial wall cross-section. The hypothetical "measured" displacement fields for these problem domains are created by solving the finite element forward problem; noise is then added to the displacement field to simulate noisy measured data. The results of this thesis work point to the emerging potential of elasticity imaging in the medical field.
The inclusion in problem domain 1, representing a stiffer tumor in a uniform background, can be detected and located in the shear modulus reconstructions. These reconstructed images can therefore potentially be used to detect tumors in medical practice.
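
The minimization described above can be sketched in one dimension, where the forward problem reduces to springs in series under a unit end load and the stiff middle element plays the role of the inclusion. This toy problem is illustrative only; it is not the incompressible finite element formulation used in the thesis.

```python
import numpy as np

# Sketch of the inverse problem: recover a per-element modulus mu from a
# synthetic "measured" displacement field by minimizing the data mismatch
# ||u(mu) - u_meas||^2. All values are illustrative.

def forward(mu):
    """Forward problem: node displacements of springs in series, unit load."""
    return np.cumsum(1.0 / mu)

mu_true = np.array([1.0, 5.0, 1.0])        # stiff "inclusion" in element 2
u_meas = forward(mu_true)                  # hypothetical measured data

# In compliance variables c = 1/mu the forward map is linear, u = A @ c, so
# minimizing ||A @ c - u_meas||^2 is an ordinary least-squares problem.
A = np.tril(np.ones((3, 3)))
c, *_ = np.linalg.lstsq(A, u_meas, rcond=None)
mu_rec = 1.0 / c

print(np.round(mu_rec, 6))                 # recovers the stiff inclusion
```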

    Singular Vector Filtering Method for Disturbance Enhancement Mitigation in Active Noise Control Systems

    Get PDF
    In multichannel active noise control systems, when the reference signals are correlated, the disturbance enhancement phenomenon is likely to occur: if the filter is designed to minimize the total energy over all frequencies, the resulting sound is enhanced instead of reduced in some frequency bands. In previous work, a truncated singular value decomposition method was applied to the system autocorrelation matrix to mitigate the disturbance enhancement caused by the correlation of the reference signals: small singular values and their associated singular vectors are removed if they are responsible for unwanted disturbance enhancement in some frequency bands. However, some of these removed singular vectors may still contribute to noise control performance in other frequency bands, so direct truncation degrades the noise control performance. In the present work, through an additional filtering process, the singular vectors that cause the disturbance enhancement are replaced by new singular vectors whose frequency responses are attenuated in the band where enhancement occurs, while the responses in other frequency bands are left unchanged. Compared with truncation, the proposed method maintains performance in the noise reduction bands while mitigating the influence in the disturbance enhancement bands.
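
The core filtering idea, attenuating only the offending band of a singular vector's spectrum while leaving the rest untouched, can be sketched with an FFT-domain mask. The vector, band edges, and attenuation factor below are illustrative assumptions, not values from the cited system.

```python
import numpy as np

# Sketch: instead of discarding a singular vector that causes enhancement in
# one frequency band, zero only that band of its spectrum and keep the rest.

def band_filter(v, band, atten=0.0):
    """Attenuate the FFT bins of v selected by the boolean mask `band`."""
    V = np.fft.rfft(v)
    V[band] *= atten
    return np.fft.irfft(V, n=len(v))

rng = np.random.default_rng(0)
v = rng.standard_normal(64)              # stand-in for a singular vector
band = np.zeros(33, dtype=bool)          # 33 rfft bins for length 64
band[10:16] = True                       # hypothetical enhancement band

v_f = band_filter(v, band)
V, Vf = np.abs(np.fft.rfft(v)), np.abs(np.fft.rfft(v_f))
print(Vf[band].max())                    # ~0: enhancement band suppressed
print(np.allclose(V[~band], Vf[~band]))  # True: other bands unchanged
```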

    Truncated Singular Value Decomposition Method for Mitigating Unwanted Enhancement in Active Noise Control Systems

    Get PDF
    It is well known that good noise cancellation performance can only be realized by a multiple-input active noise control system when the primary noise sources are persistently exciting and the reference signals are uncorrelated. Otherwise, the noise reduction performance will deteriorate and, quite possibly, the noise will be enhanced. In particular, when the reference signals are correlated in a certain frequency band, enhancement can occur in that band. In the present work, singular value decomposition was applied to the autocorrelation matrix of the reference signals to analyze this enhancement issue. The level of enhancement was found to be associated with the small singular values, and the enhancement frequency bands with large values of the frequency response of the filters corresponding to the singular vectors associated with those small singular values. Based on this analysis, a method that removes the small singular values and associated singular vectors of the autocorrelation matrix was proposed and applied to mitigate the noise enhancement. The designed controllers were implemented experimentally in real time, and the experimental performance, in which the noise enhancement was reduced, agreed well with off-line simulation results.
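
The truncation step can be sketched numerically: strongly correlated reference signals make the autocorrelation matrix nearly singular, and dropping the small singular values yields a well-conditioned pseudo-inverse for the filter design. The signals and threshold below are illustrative assumptions.

```python
import numpy as np

# Two nearly identical reference signals produce an ill-conditioned
# autocorrelation matrix; truncated SVD keeps only the dominant subspace.
rng = np.random.default_rng(1)
s = rng.standard_normal(1000)
refs = np.stack([s, s + 0.01 * rng.standard_normal(1000)])  # correlated pair
R = refs @ refs.T / 1000                   # autocorrelation matrix

U, sv, Vt = np.linalg.svd(R)
keep = sv > 0.01 * sv.max()                # drop small singular values
R_pinv = (Vt[keep].T / sv[keep]) @ U[:, keep].T  # truncated pseudo-inverse

print(np.linalg.cond(R))                   # very large: nearly singular
print(keep)                                # only the large singular value kept
```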

    InDL: A New Datasets and Benchmark for In-Diagram Logic Interpreting based on Visual Illusion

    Full text link
    This paper introduces a novel approach to evaluating deep learning models' capacity for in-diagram logic interpretation. Leveraging the intriguing realm of visual illusions, we establish a unique dataset, InDL, designed to rigorously test and benchmark these models. Deep learning has witnessed remarkable progress in domains such as computer vision and natural language processing. However, models often stumble in tasks requiring logical reasoning because of their inherent 'black box' characteristics, which obscure the decision-making process. Our work presents a new lens for understanding these models by focusing on their handling of visual illusions, a complex interplay of perception and logic. We utilize six classic geometric optical illusions to create a comparative framework between human and machine visual perception. This methodology offers a quantifiable measure for ranking models, elucidating potential weaknesses and providing actionable insights for model improvements. Our experimental results affirm the efficacy of our benchmarking strategy, demonstrating its ability to effectively rank models by their logic interpretation ability. As part of our commitment to reproducible research, the source code and datasets will be made publicly available at https://github.com/rabbit-magic-wh/InDL.

    Dual-mode adaptive-SVD ghost imaging

    Full text link
    In this paper, we present a dual-mode adaptive singular value decomposition ghost imaging (A-SVD GI) scheme that can easily be switched between imaging and edge detection modes. It adaptively localizes the foreground pixels via a threshold selection method; only the foreground region is then illuminated by the singular value decomposition (SVD)-based patterns, retrieving high-quality images at lower sampling ratios. By changing the selection range of foreground pixels, A-SVD GI can be switched to the edge detection mode to reveal the edges of objects directly, without needing the original image. We investigate the performance of these two modes through both numerical simulations and experiments. We also develop a single-round scheme that halves the number of measurements in experiments, instead of illuminating positive and negative patterns separately as in traditional methods. The binarized SVD patterns, generated by a spatial dithering method, are modulated by a digital micromirror device (DMD) to speed up data acquisition. This dual-mode A-SVD GI can be applied in various settings, such as remote sensing or target recognition, and could be further extended to multi-modality functional imaging/detection.
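
The benefit of SVD-based patterns can be sketched in miniature: orthogonalizing a random pattern set via SVD lets the object be recovered from single-pixel measurements y = P x by a simple back-projection Pᵀy. The 8×8 object is an illustrative assumption; dithering, DMD modulation, and foreground selection are omitted.

```python
import numpy as np

# Toy SVD-pattern ghost imaging: orthonormal patterns from the SVD of a
# random matrix give exact recovery at full sampling.
rng = np.random.default_rng(2)
x = np.zeros((8, 8)); x[2:6, 2:6] = 1.0    # toy 8x8 object
x = x.ravel()

A = rng.standard_normal((64, 64))          # random pattern stack
U, s, Vt = np.linalg.svd(A)
P = Vt                                     # rows are orthonormal SVD patterns

y = P @ x                                  # single-pixel (bucket) measurements
x_hat = P.T @ y                            # back-projection reconstruction

print(np.allclose(x_hat, x))               # True: exact at full sampling
```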

    Quantitative and dark field ghost imaging with ultraviolet light

    Full text link
    Ultraviolet (UV) imaging enables a diverse array of applications, such as material composition analysis, biological fluorescence imaging, and detecting defects in semiconductor manufacturing. However, scientific-grade UV cameras with high quantum efficiency are expensive and require a complex thermoelectric cooling system. Here, we demonstrate a UV computational ghost imaging (UV-CGI) method that provides a cost-effective UV imaging and detection strategy. By applying spatial-temporal illumination patterns and using a 325 nm laser source, a single-pixel detector is enough to reconstruct images of objects. To demonstrate its capability for quantitative detection, we use UV-CGI to distinguish four UV-sensitive sunscreen areas of different densities on a sample. Furthermore, we demonstrate dark-field UV-CGI in both transmission and reflection schemes: by collecting only the scattered light from objects, we can detect the edges of pure phase objects and small scratches on a compact disc. Our results showcase a feasible low-cost solution for non-destructive UV imaging and detection. By combining it with other imaging techniques, such as hyperspectral imaging or time-resolved imaging, a compact and versatile UV computational imaging platform may be realized for future applications. Comment: 9 pages, 5 figures
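
The single-pixel reconstruction behind computational ghost imaging can be sketched with the classic correlation estimate: the object is recovered as the covariance between each pattern pixel and the bucket signal, G(r) = ⟨(y − ⟨y⟩)(P(r) − ⟨P(r)⟩)⟩. The 1D object and pattern count below are illustrative assumptions, not the UV experiment.

```python
import numpy as np

# Correlation-based ghost imaging: random illumination patterns P, a bucket
# signal y = P @ x from a single-pixel detector, and a covariance estimate.
rng = np.random.default_rng(3)
x = np.zeros(64); x[20:44] = 1.0                 # toy 1D object
P = rng.random((20000, 64))                      # illumination patterns
y = P @ x                                        # bucket detector values

# Covariance of the bucket signal with each pattern pixel recovers x up to
# an overall scale, with noise that shrinks as patterns accumulate.
G = ((y - y.mean())[:, None] * (P - P.mean(0))).mean(0)
print(np.corrcoef(G, x)[0, 1])                   # close to 1
```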

    Data-Juicer: A One-Stop Data Processing System for Large Language Models

    Full text link
    The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, diverse, and high-quality data. Despite this, existing open-source tools for LLM data processing remain limited and mostly tailored to specific datasets, with an emphasis on the reproducibility of released data over adaptability and usability, inhibiting potential applications. In response, we propose a one-stop, powerful yet flexible and user-friendly LLM data processing system named Data-Juicer. Our system offers over 50 built-in versatile operators and pluggable tools, which synergize modularity, composability, and extensibility dedicated to diverse LLM data processing needs. By incorporating visualized and automatic evaluation capabilities, Data-Juicer enables a timely feedback loop to accelerate data processing and gain data insights. To enhance usability, Data-Juicer provides out-of-the-box components for users with various backgrounds, and fruitful data recipes for LLM pre-training and post-tuning usages. Further, we employ multi-facet system optimization and seamlessly integrate Data-Juicer with both LLM and distributed computing ecosystems, to enable efficient and scalable data processing. Empirical validation of the generated data recipes reveals considerable improvements in LLaMA performance for various pre-training and post-tuning cases, demonstrating up to 7.45% relative improvement of averaged score across 16 LLM benchmarks and 16.25% higher win rate using pair-wise GPT-4 evaluation. The system's efficiency and scalability are also validated, supported by up to 88.7% reduction in single-machine processing time, 77.1% and 73.1% less memory and CPU usage respectively, and 7.91x processing acceleration when utilizing distributed computing ecosystems. 
Our system, data recipes, and multiple tutorial demos are released, calling for broader research centered on LLM data. Comment: Under continuous maintenance and updating; the system, refined data recipes, and demos are at https://github.com/alibaba/data-juice
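
The composable-operator idea behind such data processing systems can be sketched generically: each operator is a small callable over samples, and a pipeline chains them. The names and operators below are hypothetical illustrations, not Data-Juicer's actual API.

```python
# Generic sketch of pluggable data-processing operators chained into a
# pipeline; `dedup` and `length_filter` are hypothetical examples.

def dedup(samples):
    """Keep the first occurrence of each distinct text."""
    seen, out = set(), []
    for s in samples:
        if s["text"] not in seen:
            seen.add(s["text"])
            out.append(s)
    return out

def length_filter(min_len):
    """Operator factory: drop samples shorter than min_len characters."""
    return lambda samples: [s for s in samples if len(s["text"]) >= min_len]

def pipeline(samples, ops):
    for op in ops:
        samples = op(samples)
    return samples

data = [{"text": "hello world"}, {"text": "hi"}, {"text": "hello world"}]
clean = pipeline(data, [dedup, length_filter(5)])
print(clean)   # [{'text': 'hello world'}]
```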